    Comparison of Various Machine Learning Models for Estimating Construction Projects Sales Valuation Using Economic Variables and Indices

    The capability of various machine learning techniques to predict construction project profit in residential buildings using a combination of economic variables and indices (EV&Is) and physical and financial variables (P&F) as input variables remains uncertain. Although recent studies have primarily focused on identifying the factors influencing the sales of construction projects due to their significant short-term impact on a country's economy, predicting these parameters is crucial for ensuring project sustainability. While techniques such as regression and artificial neural networks have been utilized to estimate construction project sales, limited research has been conducted in this area. The application of machine learning techniques offers several advantages over conventional methods, including reductions in cost, time, and effort. Therefore, this study aims to predict the sales valuation of construction projects using various machine learning approaches, incorporating different EV&Is and P&F as input features and generating the sales valuation as the output. The research undertakes a comparative analysis to investigate the efficiency of the different machine learning models and identify the most effective approach for estimating the sales valuation of construction projects. By leveraging machine learning techniques, it is anticipated that the accuracy of sales valuation predictions will be enhanced, ultimately resulting in more sustainable and successful construction projects. In general, the findings reveal that the extremely randomized trees model delivers the best performance, while the decision tree model exhibits the least satisfactory performance in predicting the sales valuation of construction projects.
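    The model comparison described above can be sketched with scikit-learn; the data here is a synthetic stand-in, not the study's actual EV&I or P&F variables, and the feature count and noise level are arbitrary assumptions for illustration.

```python
# Illustrative sketch: comparing an extremely randomized trees ensemble with a
# single decision tree on a synthetic regression task standing in for sales
# valuation. Data and features are placeholders, not the study's dataset.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                                  # stand-ins for EV&I / P&F inputs
y = X @ rng.normal(size=6) + rng.normal(scale=0.3, size=500)   # synthetic "sales valuation"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "extra_trees": ExtraTreesRegressor(n_estimators=200, random_state=0),
    "decision_tree": DecisionTreeRegressor(random_state=0),
}
scores = {name: r2_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
          for name, m in models.items()}
print(scores)
```

    On data like this, the ensemble's variance reduction typically gives it the higher R² score, mirroring the ranking the abstract reports.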

    Applications of Nearest Neighbor Search Algorithm Toward Efficient Rubber-Based Solid Waste Management in Concrete

    Natural processes of discarding rubber waste have many disadvantages for the environment. As a result, multiple researchers have suggested addressing this problem by recycling rubber as an aggregate in concrete mixtures. Numerous studies have previously been undertaken experimentally to investigate the properties of rubberized concrete. Furthermore, investigations were carried out to develop estimation techniques that precisely specify the resulting concrete's characteristics, making its use in real-life applications easier. However, there is still a gap in the conducted studies on the performance of the k-nearest neighbors algorithm. Hence, this research explores the accuracy of the k-nearest neighbors algorithm in predicting the compressive strength, tensile strength, and modulus of elasticity of rubberized concrete. This is done by developing an optimized machine learning model using the aforementioned method and benchmarking its results against the outcomes of multiple linear regression and artificial neural networks. The study's findings show that the k-nearest neighbors algorithm provides significantly higher accuracy than the other methods, supporting more effective handling of rubber waste in concrete. Doi: 10.28991/CEJ-2022-08-04-06
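    A k-nearest neighbors regressor of the kind benchmarked above can be sketched as follows; the mix-design features and strength formula are hypothetical placeholders, not the paper's experimental data.

```python
# Illustrative sketch: k-NN regression for a rubberized-concrete property.
# Features (e.g. cement, water, rubber fraction, age) and the target formula
# are synthetic assumptions, not the paper's measurements.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.uniform(size=(300, 4))                  # hypothetical mix-design inputs
y = 50 - 30 * X[:, 2] + 10 * X[:, 3] + rng.normal(scale=1.0, size=300)  # "MPa"

# feature scaling matters for distance-based methods such as k-NN
model = make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=5))
model.fit(X[:250], y[:250])
pred = model.predict(X[250:])
```

    Each prediction is simply the mean strength of the five most similar mixes in the training set, which is why k-NN needs no assumed functional form.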

    Influence of Earthquake Parameters on the Bi-directional Behavior of Base Isolation Systems

    Base isolation systems, especially friction isolator devices, were introduced and developed recently to improve structures' capacity for adaptive behavior. The efficiency of multi-phase friction pendulums comes from their complexity, which helps reduce structural responses and enhance structures' energy dissipation under lateral loads. Nevertheless, the influence of various earthquake properties on the behavior of base-isolation systems subjected to bi-directional seismic loading is still unclear. Hence, further research regarding the behavior and capability of these systems under bi-directional loading is still necessary before incorporating this device in real-life practical applications. Therefore, this paper investigates the bi-directional behavior of the friction isolator subjected to various ground motion records. To do so, different versions of the friction pendulum system are selected and compared within the study context. Generally, the study's results show that the behavior of the friction isolator is highly dependent on low values of the PGA/PGV ratio. Besides, pulse-like earthquakes considerably impact the response of the isolator compared to non-pulse-like ones. Doi: 10.28991/CEJ-2022-08-10-02

    Parametric Assessment of Concrete Constituent Materials Using Machine Learning Techniques

    Technology has advanced considerably, particularly in machine learning, which is vital for minimizing the amount of human work required. Using machine learning approaches to estimate concrete properties has unquestionably attracted the interest of many researchers across the globe. Currently, assessment methods are widely adopted to calculate the impact of each input parameter on the output of a machine learning model. This paper evaluates the capability of various machine learning methodologies in conducting parametric assessments to understand the influence of each concrete constituent material on its compressive strength. This is accomplished by conducting a partial dependence analysis to quantify the effect of input features on the prediction results. As part of the study, the effect of machine learning method selection on such analysis is also investigated by employing concrete compressive strength models developed using a decision tree, random forest, adaptive boosting, stochastic gradient boosting, and extreme gradient boosting. Additionally, the significance of the input features to the accuracy of the constructed estimation models is ranked through drop-out loss and MSE reduction. The investigation shows that the machine learning techniques can accurately predict the concrete's compressive strength with very high performance. Further, most analyzed algorithms yielded similar estimations regarding the influence of the concrete constituent materials on strength. In general, the study's results show that the drop-out loss and MSE reduction outputs were misleading, whereas the partial dependence plots provide a clear picture of the influence of each feature's value on the prediction outcomes.
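    The partial dependence idea used above can be shown in a few lines: sweep one feature over a grid, hold the others at their observed values, and average the model's predictions. The data and the random forest below are illustrative assumptions, not the paper's concrete dataset or tuned models.

```python
# Illustrative sketch of partial dependence: the average model response as one
# input feature is swept over its range. Synthetic data stands in for a real
# concrete mix dataset.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
X = rng.uniform(size=(400, 3))                  # e.g. cement, water, aggregate
y = 20 + 40 * X[:, 0] - 15 * X[:, 1] + rng.normal(scale=1.0, size=400)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

def manual_partial_dependence(model, X, feature, grid):
    # replace one column with each grid value and average the predictions
    out = []
    for v in grid:
        Xv = X.copy()
        Xv[:, feature] = v
        out.append(model.predict(Xv).mean())
    return np.array(out)

grid = np.linspace(0.0, 1.0, 20)
pd_curve = manual_partial_dependence(model, X, 0, grid)
```

    Here the curve rises across the grid because feature 0 enters the synthetic target with a positive coefficient; on real data the curve's shape is exactly what the paper reads off to judge each constituent's influence.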

    State-of-the-Art Review: Fiber-Reinforced Soil as a Proactive Approach for Liquefaction Mitigation and Risk Management

    Soil liquefaction is a phenomenon in which the behavior of soil changes from solid to viscous liquid due to earthquake shaking or other sudden loadings. The earthquake generates excess pore water pressure, which weakens saturated loose soil and potentially causes large ground deformation and lateral spreading. Soil liquefaction is a dangerous event that can lead to catastrophic outcomes for humans and infrastructure, especially in countries prone to earthquake shaking, where it is considered one of the most prevalent types of ground failure. Hence, precautions to reduce and/or prevent soil liquefaction are essential. One countermeasure is the introduction of fibers into the soil, since fibers act as reinforcement by enhancing the soil's strength and resistance to liquefaction. The process of incorporating fibers into the soil is known as soil stabilization and is considered one of the ground improvement techniques. Therefore, this paper aims to summarize and review the consequences of adding fiber as a reinforcement technique to overcome the issue of soil liquefaction.

    Impact of interpolation techniques on the accuracy of large-scale digital elevation model

    There is no doubt that the tremendous development of information technology was one of the driving factors behind the great growth of surveying and geodesy science. This has spawned modern geospatial techniques for data capture, acquisition, and visualization. A digital elevation model (DEM) is the 3D depiction of continuous elevation data over the Earth's surface, produced through procedures such as remote sensing, photogrammetry, and land surveying. DEMs are essential for various surveying and civil engineering applications to generate topographic maps for construction projects at scales varying from 1:500 to 1:2,000. GIS offers a powerful tool to create a high-resolution DEM from accurate land survey measurements using interpolation methods. The aim of this research is to investigate the impact of interpolation techniques on generating a reliable and accurate DEM suitable for large-scale mapping. As part of this study, deterministic interpolation algorithms such as ANUDEM (Topo to Raster), inverse distance weighted (IDW), and triangulated irregular network (TIN) were tested using ArcGIS Desktop on elevation data obtained from real total station readings over different landforms, to show the effect of terrain roughness, data density, and the interpolation process on DEM accuracy. Furthermore, each interpolator was compared and validated through cross-validation and numerous graphical representations of the DEM. Finally, the results showed that the ANUDEM and TIN models are similar and significantly better than those attained from IDW.
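    The IDW interpolator tested above has a simple core: each surveyed point contributes to the estimate with a weight that decays with distance. A minimal NumPy sketch follows, with made-up coordinates and elevations standing in for total station readings; it is not the ArcGIS implementation.

```python
# Illustrative sketch: inverse distance weighted (IDW) interpolation of spot
# elevations. Points and elevations are invented examples, not survey data.
import numpy as np

def idw(points, values, query, power=2.0):
    """Estimate elevation at `query` from surveyed (x, y) `points`."""
    d = np.linalg.norm(points - query, axis=1)
    if np.any(d == 0):                      # query coincides with a sample
        return float(values[np.argmin(d)])
    w = 1.0 / d ** power                    # closer points weigh more
    return float(np.sum(w * values) / np.sum(w))

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
elev = np.array([10.0, 12.0, 14.0, 16.0])
z = idw(pts, elev, np.array([0.5, 0.5]))    # equidistant case: plain average
```

    Because IDW always averages, it cannot predict values outside the range of the surveyed elevations, which is one reason it can underperform surface-fitting methods like ANUDEM and TIN on rough terrain.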

    Tabular Data Generation to Improve Classification of Liver Disease Diagnosis

    Liver diseases are among the most common diseases worldwide. Because of their high incidence and high mortality rate, diagnosing these diseases is vital. Several factors harm the liver, such as obesity, undiagnosed hepatitis infection, and alcohol abuse. These can cause abnormal nerve function, bloody coughing or vomiting, insufficient kidney function, hepatic failure, jaundice, and hepatic encephalopathy. The diagnosis of this disease is very expensive and complex. Therefore, this work aims to assess the performance of various machine learning algorithms at decreasing the cost of predictive diagnoses of chronic liver disease. In this study, five machine learning algorithms were employed: Logistic Regression, K-Nearest Neighbor, Decision Tree, Support Vector Machine, and Artificial Neural Network (ANN). The work examines the effect on prediction accuracy of Generative Adversarial Networks (GANs) and the synthetic minority oversampling technique (SMOTE). GANs are a mechanism to produce artificial data with a distribution close to the real data distribution. This is achieved by training two different networks: the generator, which seeks to produce new, realistic samples, and the discriminator, which classifies the augmented samples using supervised classification. The results show that the use of augmented data slightly improves the performance of the classifier.
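    The SMOTE half of the augmentation strategy above can be sketched directly: each synthetic minority sample is an interpolation between a real minority point and one of its nearest minority neighbors. This is a minimal NumPy version of the idea, written as an assumption-laden illustration rather than the imbalanced-learn library's API, and the patient features are random placeholders.

```python
# Illustrative SMOTE-style oversampling: synthesize new minority-class samples
# by interpolating between a minority point and a random one of its k nearest
# minority neighbors. Minimal sketch, not the imbalanced-learn implementation.
import numpy as np

def smote_like(X_min, n_new, k=5, seed=0):
    rng = np.random.default_rng(seed)
    new = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbors = np.argsort(d)[1:k + 1]      # skip the point itself
        j = rng.choice(neighbors)
        lam = rng.random()                      # interpolation factor in [0, 1)
        new.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(new)

rng = np.random.default_rng(0)
X_minority = rng.normal(size=(20, 3))           # hypothetical patient features
X_synth = smote_like(X_minority, n_new=30)
```

    Unlike a GAN, which learns the minority distribution with a generator/discriminator pair, this interpolation scheme can only place new samples between existing ones, which is the key trade-off between the two augmentation methods the study compares.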